Liability regimes in the age of AI: a use-case driven analysis of the burden of proof

Fernández Llorca, David, Charisi, Vicky, Hamon, Ronan, Sánchez, Ignacio, Gómez, Emilia

arXiv.org Artificial Intelligence

New emerging technologies powered by Artificial Intelligence (AI) have the potential to disruptively transform our societies for the better. In particular, data-driven learning approaches (i.e., Machine Learning (ML)) have been a true revolution in the advancement of multiple technologies in various application domains. But at the same time there is growing concern about certain intrinsic characteristics of these methodologies that carry potential risks to both safety and fundamental rights. Although there are mechanisms in the adoption process to minimize these risks (e.g., safety regulations), these do not exclude the possibility of harm occurring, and if this happens, victims should be able to seek compensation. Liability regimes will therefore play a key role in ensuring basic protection for victims using or interacting with these systems. However, the same characteristics that make AI systems inherently risky, such as lack of causality, opacity, unpredictability or their self- and continuous-learning capabilities, may lead to considerable difficulties when it comes to proving causation. This paper presents three case studies, as well as the methodology used to develop them, that illustrate these difficulties. Specifically, we address the cases of cleaning robots, delivery drones and robots in education. The outcome of the proposed analysis suggests the need to revise liability regimes to alleviate the burden of proof on victims in cases involving AI technologies.


Liability Regimes in the Age of AI: a Use-Case Driven Analysis of the Burden of Proof

Fernández Llorca, David (European Commission, Joint Research Centre) | Charisi, Vicky | Hamon, Ronan | Sánchez, Ignacio | Gómez, Emilia

Journal of Artificial Intelligence Research

This article appears in the AI & Society track. The abstract is identical to that of the arXiv version listed above.


The liability regime for AI systems – DPOblog

#artificialintelligence

To the extent necessary to bring a claim for damages, Art. 3 of the AI Liability Directive allows a court to order the disclosure of relevant evidence concerning certain high-risk AI systems. Blanket requests for evidence are not permitted, and disclosure must be limited to what is necessary. This is intended to reconcile the conflicting interests of the parties, since disclosure may also reveal trade secrets. The defendant is accordingly to be given a legal remedy against the court's order. If a defendant does not comply with the court's order for disclosure, a rebuttable presumption applies against it.


Regulating the future: A look at the EU's plan to reboot product liability rules for AI

#artificialintelligence

A recently presented European Union plan to update long-standing product liability rules for the digital age -- including addressing the rising use of artificial intelligence (AI) and automation -- took some instant flak from the European consumer organisation BEUC, which framed the update as something of a downgrade, arguing that EU consumers will be left less well protected from harms caused by AI services than from those caused by other types of products. For a flavor of the sorts of AI-driven harms and risks that may be fuelling demands for robust liability protections, only last month the UK's data protection watchdog issued a blanket warning over pseudoscientific AI systems that claim to perform 'emotional analysis', urging that such tech should not be used for anything other than pure entertainment. On the public sector side, back in 2020 a Dutch court found that an algorithmic welfare risk assessment for social security claimants breached human rights law. And, in recent years, the UN has also warned over the human rights risks of automating public service delivery. Additionally, US courts' use of black-box AI systems to make sentencing decisions -- opaquely baking in bias and discrimination -- has been a tech-enabled crime against humanity for years. BEUC, an umbrella consumer group which represents 46 independent consumer organisations from 32 countries, had been calling for years for an update to EU liability laws to take account of growing applications of AI and to ensure consumer protection laws are not outpaced.


EU proposes new approach to liability for artificial intelligence systems

#artificialintelligence

The European Commission published proposals on 28 September 2022 for adapting civil litigation rules in European Union Member States -- and in the European Economic Area -- to reduce perceived difficulties in claiming non-contractual damages for harm caused by artificial intelligence (AI). The proposal sits alongside wider reforms to the product liability regime, and both are closely intertwined with the EU's proposed AI Act. The AI liability reforms are aimed at making it less burdensome for claimants to secure compensation, with the intention of promoting trust in this increasingly pervasive technology. Claimants in civil law systems (which typically lack common law-style disclosure obligations) often have much less information than the defendant about the events that they believe caused them harm.


LEAK: Commission to propose rebuttable presumption for AI-related damages

#artificialintelligence

The European Commission will present a liability regime targeted at damage originating from Artificial Intelligence (AI) that would place a rebuttable presumption of causality against the defendant, according to a draft obtained by EURACTIV. The AI Liability Directive is scheduled to be published on 28 September, and it is meant to complement the Artificial Intelligence Act, an upcoming regulation that introduces requirements for AI systems based on their level of risk. "This directive provides in a very targeted and proportionate manner alleviations of the burden of proof through the use of disclosure and rebuttable presumptions," the draft reads. "These measures will help persons seeking compensation for damage caused by AI systems to handle their burden of proof so that justified liability claims can be successful." The proposal follows the European Parliament's own-initiative resolution adopted in October 2020, which called for facilitating the burden of proof and a strict liability regime for AI-enabled technologies.


Research study on the legal liability of autonomous robotics

#artificialintelligence

I found really interesting a 2020 study titled "Legal Liability for Autonomous Robotics" by Dr. Safaa Fatouh Gomaa, a member of the Faculty of Law of Egypt's Mansoura University. The study addresses legal issues concerning liability for Artificial Intelligence products, and more specifically for autonomous robotics. According to Gomaa, under the European resolutions of 2017 and 2018 the liability rules cover cases where the cause of the robot's actions or missteps can be attributed to a specific human agent, such as the manufacturer, the operator, the owner or the user, and where this agent could have foreseen and avoided the robot's harmful conduct. He also adds that, since digital technologies are constantly evolving due to patches, updates and software extensions that influence the behaviour of all parts of the system, it is crucial to identify responsibilities among the different actors in the AI supply chain. Given the complexity of the topic, the researcher divides the paper into three sections: section 1 sets out the historical, international and legal framework for robots; section 2 identifies legal responsibility for autonomous industrial robotics; and section 3 presents his conclusions. Robot concepts began as legends.


Liability for artificial intelligence – Why Canadian businesses should pay attention to recent developments in Europe | Inside Internal Controls

#artificialintelligence

Late last year, the European Commission's Expert Group on Liability and New Technologies – New Technologies Formation (NTF) released a report on Liability for Artificial Intelligence. The report focuses on liability regimes across European Union (EU) member states and offers high-level recommendations on how those regimes can be adapted to meet the challenges posed by artificial intelligence (AI) and other digital technologies. Insights from this report may inform legislative and regulatory changes in the EU and elsewhere, including in Canada. Here's what you need to know. The NTF first convened in June 2018.